perm filename REVIEW[RDG,DBL] blob
sn#600192 filedate 1981-07-20 generic text, type C, neo UTF8
Evans stuff
	Early 1960s - reported in SIP (Semantic Information Processing, ed. Minsky)
First program to use Analogy (explicitly)
Task: IQ type A:B :: C:?, where ? one of 5 figures (P 273)
Process: 290-294
---- Part 1 ----
1) Describe all Figures (A-C, 1-5)
Finds properties, relations of objects in each figure
2) Find transformations: A→B, C→i ∀i;
also A→C, B→C, & A→i, B→i ∀i
This object-in-1 is like object-in-2, when rotated, scaled, ...
---- Part 2 ----
	3) a) Find mapping from A→B, consistent with relations (variablize)
[MATCH, ADD, REMOVE objects]
b) Find mapping from C→i, consistent with relations
4) Try A→B rule on C; to see if get i.
5) Iteratively weaken A→B xform (by eliminating conjuncts)
until corresponds to unique C→j.
This is least weak, and j is answer.
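The weakening loop of steps 3-5 can be sketched as below. This is a minimal sketch, not Evans' actual LISP; representing rules as sets of conjunct strings is my own hypothetical encoding:

```python
# Hedged sketch of steps 3-5: rules as conjunct-sets, weakened by
# dropping conjuncts until exactly one candidate C->i rule matches.
from itertools import combinations

def pick_answer(rule_ab, candidate_rules):
    """rule_ab: set of conjuncts describing the A->B transformation.
    candidate_rules: {answer_figure: conjunct-set for C->i}.
    Try the full rule first, then progressively weaker subsets
    (largest first); return the figure of the unique match."""
    conjuncts = sorted(rule_ab)
    for size in range(len(conjuncts), 0, -1):
        for subset in combinations(conjuncts, size):
            weak = set(subset)
            matches = [i for i, r in candidate_rules.items()
                       if weak <= r]        # weakened rule holds for C->i
            if len(matches) == 1:           # unique => least-weak answer
                return matches[0]
    return None                             # no unique answer found

rule_ab = {"rotate(obj1, 90)", "remove(obj2)"}
cands = {1: {"rotate(obj1, 90)", "add(obj3)"},
         2: {"rotate(obj1, 90)", "remove(obj2)", "shade(obj1)"},
         3: {"scale(obj1, 2)"}}
print(pick_answer(rule_ab, cands))  # -> 2 (full rule already unique)
```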
Comments:
General:
		No looping back to description - used separate modules (IBM cards)
Form required, but actual relations arbitrary.
Specific:
re: 1) - Based on fixed language,
Positive properties
			Really 0th step, for decomposition.
re: 3) - Non recursive - either equal or not
Conjunct of properties
re: 4) - An optimization:
3.5) Throw out C→i if #'s don't match.
If all gone, relax.
re: 5) - Order heuristic - to correspond to humans
If non-unique, try again.
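The 3.5 count-filter optimization (with its relax-if-all-gone fallback) can be sketched as follows; all object counts here are hypothetical:

```python
# Hedged sketch of optimization 3.5: discard answer figures whose object
# count cannot match the A->B change in count; if that eliminates every
# candidate, relax (keep them all).
def count_filter(n_a, n_b, n_c, answer_counts):
    """n_a, n_b, n_c: object counts in figures A, B, C.
    answer_counts: {answer_figure: object count}."""
    delta = n_b - n_a                       # objects added/removed by A->B
    kept = [i for i, n in answer_counts.items() if n - n_c == delta]
    return kept or list(answer_counts)      # if all gone, relax

print(count_filter(2, 1, 3, {1: 2, 2: 3, 3: 2}))  # -> [1, 3]
```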
Weaknesses:
Only conjuncts of positive traits
[Eg not No object of A is = object of B]
Unable to recur - objects either EQUAL or not.
Unable to adapt - ie heuristics used built-into code
[Eg another order of attempts, how to weaken conjunct]
Unable to return to decomposing step.
	Limited in scope to discrimination -
Able to find best from sample, not "good".
In summary: totally syntactic - no world knowledge, just other smarts.
Strengths:
	Did work - over a large class of cases
Knew some of these weaknesses - esp 4.
Arbitrary relations (for part 2, eg shading)
Some good optimizations
Final comments:
	1. This was the first big LISP program - justified PLISTs, etc.
2. Granddaddy of most ANALOGY programs
Worked symbolically.
Well (objectively) described.
- of problem, future directions and implementation.
Review of Fodor's article in January 81 "Scientific American".
"The Mind-Body Problem"
Overhead
Fodor's article in January 81 "Scientific American".
(After overviewing this article, I'll discuss its content)
Prof @ MIT in Psychology & Philosophy/Linguistics
Motivation - Philosophical article
Mentioned AI in intro
I never understood dualism vs functionalism...,
& these seemed relevant to discussions re: intelligent machines/people
Organization:
Intro
Describes & Critiques Prior Explanations
The answer - functionalism
	Further issues of Mind
(And I'll just give quick paraphrase of this, in order.
See diagram)
Content
-------------------------------------------------------
-------------------------------------------------------
History
	Philosophy: recent interest in explaining psychological states
	Is mind material, or ethereal?
Dualist vs Materialists [Behavior & Identity]
↑ Watson
Dualist: Mind is non-physical
Problem: Violation of physical laws (conservation...)
Mind-body causality
Fallen from favor for other reasons as well.
Radical Behaviorist:
Character:
Only stimulus/response
Role of Psy is catalog S/Rs.
Plus:
Better than ghosts
Problem:
No talk of mental causes/states, ...
Logical Behaviorist: Provides semantics to mental states:
Character:
Equates mental states with behavioral disposition
Every mental ascription ≡ (in meaning) to If/Then rule
Translates mental language into language of S/R
Plus:
Provides a materialistic account of mental causation
Problem:
Insists there are NO mental causes
		Does not account for all interactions [between mental states]
Does not allow abstraction
Requires open-ended # behavior hypotheticals
to spell out the behavior dispositions expressed by mental term
Central-State Identity Theorist:
Character:
Mental causes ≡ neurophysiological events in brain
Plus:
		Can have totally internal interactions [NOT leading to behaviour]
So mental processes really physical
Problem:
Need abstraction - above level of neuron!
Hardware based - how about Software?
Functionalist - from Cognitive Science
Character:
Not what stuff is made of, but how arranged.
View of information processor - with states...
Stuff comes in, states change, stuff goes out.
Plus:
Compatible with best of LB & C-SIT, but independent of material
(and so extendable to other systems)
Problem:
Not untruth, just triviality
		[If states only functionally defined, then like Homunculi ]
Answer: must suggest mechanism itself
Here Turing machine (well defined computation on discrete symbols)
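The functionalist picture ("stuff comes in, states change, stuff goes out") can be made concrete with a toy state machine whose states are defined only by their input/state/output role, not by what realizes them. State and stimulus names below are invented for illustration:

```python
# Hedged sketch: "mental" states defined purely functionally, as a
# transition table -- any substrate implementing this table has, on the
# functionalist view, the same states.
TRANSITIONS = {                  # (state, input) -> (next_state, output)
    ("calm",    "insult"):  ("annoyed", "frown"),
    ("annoyed", "insult"):  ("angry",   "shout"),
    ("annoyed", "apology"): ("calm",    "smile"),
}

def step(state, stimulus):
    """One computation step; unknown stimuli leave the state unchanged."""
    return TRANSITIONS.get((state, stimulus), (state, None))

print(step("calm", "insult"))  # -> ('annoyed', 'frown')
```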
---------------------------------------------
---------------------------------------------
Questions re: Functionalism -
1) Not limited to minds
2) Qualitative vs Quantitative Content [inverted spectrum - green/red]
- if functionally the same, how to distinguish?
3) Handling of intentional content of mental states
(propositions) - Functionalism has done well here.
Basically: Symbols (they also have intentional content)
4) Issues of representation(al theory of the mind)
	Resemblance, and that set of problems ("tall"ness of John)
Here Functionalism helps: semantics depending on function
(↑ This seems central issue to empirical theories of the mind)
------------------------------------------------
------------------------------------------------
Token Physicalism: All mental particulars WHICH HAPPEN TO EXIST are
	neurophysiological.
Type Physicalism: Any possible mental particular must be neurophysiological.
	Hence Type P. dismisses machines & disembodied spirits, as no neurons
Critical comments on HEURS paper
∂TO DBL 6-Oct-80
I already mentioned the glaring lack of a solid example, with which the
reader can begin to understand these axes. This might be a tricky task,
as I'm not convinced your diagram will work, for the following reasons:
[I'm not saying it wouldn't; only that you haven't provided a proof of the
following essential points:]
1) Yes, you did justify that for most tasks, any given heuristic will have
a small negative value -- corresponding to the time to evaluate the IF clause.
[I would claim that value is not as negative as one might think, as the fact that
H#37 knows it doesn't apply to Task#44 is itself useful information, which a
meta-level observer might use to great advantage.]
2) Yes, I accept the premise that if the IF part of a heuristic is mis-tuned
then the user will lose by applying that rule -- and so the utility will be
negative, and perhaps fairly large.
3) I do believe that SIMILAR heuristics will apply to similar tasks.  It
requires a small leap of faith to accept the premise that one could apply
the SAME rule-of-thumb to more than one task, and in more than one domain.
This essential presumption could be established by claiming (as we have)
that each rule is really a fairly general entity, which may be passed
a set of parameters which fits the general
principle to the specific case. (See UNITS Spec relation.)
Notice an important side-effect springs from the idea of "passing the
domain specifics" as an argument:
This means it is meaningful to discuss a Task axis, as done in the paper,
and helps define it as well; or at least provide
a language in which such a definition can be stated.
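The "general rule plus passed-in domain specifics" idea of point 3 can be sketched as below; the function and parameter names are my own, not from UNITS:

```python
# Hedged sketch: one general rule-of-thumb, fitted to a specific domain
# by the parameters passed in (cf. the claim that each rule is a general
# entity specialized per task).
def prefer_extreme_cases(task, extremity_measure, threshold=0.9):
    """General heuristic: 'look at extreme cases first'.
    extremity_measure and threshold are the passed-in domain specifics."""
    return [case for case in task
            if extremity_measure(case) >= threshold]

# The SAME rule applied in two domains, with different parameters:
numbers = [0, 1, 3, 7, 100]
print(prefer_extreme_cases(numbers,
                           lambda n: 1.0 if n in (0, 100) else 0.0))
sets = [set(), {1}, {1, 2, 3}]
print(prefer_extreme_cases(sets,
                           lambda s: 1.0 if len(s) == 0 else 0.5))
```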
4) There are a lot of questionable assumptions you made about the graph
of utility as a function of task.  (This is one of the problems with applying the
"When in doubt, draw a diagram" heuristic to as fuzzy a field as heuristics...)
a) Even granting that there is a well defined domain of tasks, it's not clear
that f#23, which maps the utility of H#23 as a function of task, will be
continuous.
b) Why do you assume that all n*m graphs of all N heuristics will have
a single hump? I'll grant that for H#19, one
can twiddle the axes of the (i,j) graph to move all the humps into one
contiguous region; but this does NOT imply that the other n-1 graphs, using
this same scale for the abscissa, will also have a single peak.
And this says nothing about the other N-1 heuristics - in particular about the shape
of each of their n*m graphs, using these coordinates.
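A tiny numeric sketch of point 4b (all utility values invented): twiddling the task axis so that H#19's graph is single-humped does not make H#23's graph single-humped on that same axis:

```python
# Hedged sketch: an axis ordering tuned to one heuristic's utility curve
# can leave a second heuristic's curve multi-peaked.
def peaks(seq):
    """Number of strict interior local maxima in seq."""
    return sum(1 for i in range(1, len(seq) - 1)
               if seq[i - 1] < seq[i] > seq[i + 1])

u19 = {"t1": 1, "t2": 5, "t3": 2, "t4": 4, "t5": 0}   # utility of H#19
u23 = {"t1": 5, "t2": 1, "t3": 2, "t4": 4, "t5": 1}   # utility of H#23
order = sorted(u19, key=u19.get)        # task axis twiddled to suit H#19
print([u19[t] for t in order], peaks([u19[t] for t in order]))
# -> [0, 1, 2, 4, 5] 0   (monotone: single hump)
print([u23[t] for t in order], peaks([u23[t] for t in order]))
# -> [1, 5, 2, 4, 1] 2   (two humps on the same axis)
```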
------
One single well-described example will help the reader thru points 1-3 above.
Two such examples are a necessary first stab at defining the space, and
proving (or possibly disproving) your empirical-sounding claims about
the shape and niceness of that space.
Russ